
    Human Time Perception - Predictable visual stimuli are perceived earlier than unpredictable events

    What enables us to react to visual events in a timely manner despite the significant processing delays within the visual system? This delay is estimated to be about 100 ms already in higher visual areas, which matters whenever fast reactions must be initiated, such as catching a ball in flight or starting an escape. To compensate for delays in motor behavior, the brain employs predictive mechanisms. We aimed to investigate whether the predictability of a visual stimulus affects not only behavior but also the perceived time of stimulus onset in humans. Specifically, we hypothesized that predictable visual stimuli have an earlier perceived onset than unpredictable stimuli. Our approach was the following: subjects viewed streams of individual letters separated by a 1000 ms standard interval. The sequence of letters was either in alphabetical order, and thus predictable, or the last letter of the sequence was chosen at random and thus unpredictable. In each trial, subjects had to indicate whether or not the last letter followed the alphabetical order. In addition, subjects had to judge whether the last test interval, which was of varying length, was longer or shorter than the standard interval. Varying the length of this interval allowed us to estimate the point of subjective equivalence (PSE) between test and standard intervals. Since we expected predictable letters to be perceived earlier, the PSE should be larger for predictable sequences than for unpredictable ones; in other words, the test interval would need to be relatively longer to compensate for the earlier perceived onset of predictable compared to unpredictable letters. Measurements of the PSEs confirmed our expectations, suggesting that predictable visual stimuli are perceived earlier than unpredictable ones.
Hence, the perceptual system itself compensates for delays in sensory information processing, allowing us to establish a timely perception of our environment. To shed light on the neuronal correlates of this delay compensation, we performed a magnetoencephalography (MEG) study while subjects carried out the same task, intending to measure relative temporal differences between the visual evoked potentials (VEPs) of predictable and unpredictable stimuli. Analysis of the MEG data suggests that participants did generate predictions, as indicated by a late signal difference between unpredictable and predictable stimuli. However, we found no evidence that predictable stimuli evoke an earlier (or higher) VEP peak than unpredictable stimuli. Ultimately, it remains open whether the earlier perception of predictable versus unpredictable stimuli is mediated by sensory prediction, by an interplay of prediction and postdictive perceptual evaluation, or by prediction combined with a yet unknown delay-compensation mechanism.
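The PSE estimation described above can be sketched by fitting a cumulative Gaussian psychometric function to the proportion of "longer" judgments at each test-interval duration. The durations and response proportions below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical test-interval durations (ms) around the 1000 ms standard,
# with made-up proportions of "longer" responses for illustration.
durations = np.array([850, 900, 950, 1000, 1050, 1100, 1150])
p_longer = np.array([0.05, 0.12, 0.30, 0.48, 0.72, 0.90, 0.97])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: probability of judging the test interval 'longer'.
    The PSE is the duration at which this probability equals 0.5."""
    return norm.cdf(x, loc=pse, scale=sigma)

# Fit PSE and slope; an earlier perceived onset of the last letter would
# shift the fitted PSE to longer test intervals.
(pse, sigma), _ = curve_fit(psychometric, durations, p_longer, p0=[1000, 50])
print(f"PSE = {pse:.1f} ms, sigma = {sigma:.1f} ms")
```

Comparing the PSEs fitted separately to predictable and unpredictable trials then quantifies the perceptual shift.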

    It was (not) me: Causal Inference of Agency in goal-directed actions

    Summary: 
The perception of one’s own actions depends on both sensory information and predictions derived from internal forward models [1]. The integration of these information sources depends critically on whether perceptual consequences are attributed to one’s own action (sense of agency) or to changes in the external world that are unrelated to the action. The perceived effects of actions should thus depend critically on the consistency between the predicted and the actual sensory consequences of actions. To test this idea, we used a virtual-reality setup to manipulate the consistency between pointing movements and their visual consequences, and investigated the influence of this manipulation on self-action perception. We then asked whether a Bayesian causal inference model, which assumes a latent agency variable controlling the attributed influence of one’s own action on perceptual consequences [2,3], would account for the empirical data: if the percept was attributed to one’s own action, visual and internal information should fuse in a Bayesian optimal manner, whereas this should not be the case if the visual stimulus was attributed to external influences. The model fits the data well, showing that small deviations between predicted and actual sensory information were still attributed to one’s own action, whereas for large deviations subjects relied more on internal information. We discuss the performance of this causal inference model in comparison to alternative biologically feasible statistical models, using methods for Bayesian model comparison.

Experiment: 
Participants were seated in front of a horizontal board on which their right hand rested with the index finger on a haptic marker, which served as the starting point for each trial. Participants were instructed to execute straight, fast (quasi-ballistic) pointing movements of fixed amplitude, but without an explicit visual target. The hand was occluded from the participants’ view, and visual feedback about the peripheral part of the movement was provided by a cursor. Feedback was either veridical or rotated against the true direction of the hand movement by predefined angles. After each trial, participants reported the subjectively experienced direction of the executed hand movement by moving a mouse cursor in that direction.
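The feedback manipulation amounts to rotating the true hand position about the starting point before displaying the cursor. A minimal sketch (the function name and the example angle are illustrative, not taken from the study):

```python
import numpy as np

def rotate_feedback(hand_xy, angle_deg):
    """Rotate the true hand position about the start point (origin) to
    produce the cursor feedback shown to the participant. A nonzero
    angle introduces a deviation between predicted and seen movement."""
    a = np.radians(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ np.asarray(hand_xy, dtype=float)
```

With `angle_deg = 0` the feedback is veridical; predefined nonzero angles generate the consistent-vs-inconsistent conditions.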

Model: 
We compared two probabilistic models. Both include a binary random gating variable that models the sense of ‘agency’, that is, the belief that the visual feedback is influenced by the subject’s motor action. The first model assumes that both the visual feedback x_v and the internal motor state estimate x_e are directly caused by the (unobserved) real motor state x_t (Fig. 1). The second model assumes instead that the expected visual feedback depends on the perceived direction of the own motor action x_e (Fig. 2).
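A minimal sketch of the gating idea, assuming Gaussian noise; all function names and parameter values below are illustrative assumptions, not the fitted values from the study. The posterior on the agency variable weights a reliability-weighted fusion of visual and internal estimates against the internal estimate alone:

```python
import numpy as np
from scipy.stats import norm

def agency_posterior(x_v, x_e, sigma_v=5.0, sigma_e=8.0,
                     sigma_bg=60.0, p_agency=0.7):
    """Posterior belief that the visual feedback was caused by one's own
    action. Under agency, x_v is centered on x_e with combined noise;
    otherwise x_v comes from a broad 'external' distribution."""
    like_agency = norm.pdf(x_v, loc=x_e, scale=np.hypot(sigma_v, sigma_e))
    like_external = norm.pdf(x_v, loc=x_e, scale=sigma_bg)
    num = p_agency * like_agency
    return num / (num + (1.0 - p_agency) * like_external)

def perceived_direction(x_v, x_e, sigma_v=5.0, sigma_e=8.0, **kw):
    """Model-averaged estimate: reliability-weighted fusion of visual and
    internal estimates when agency is inferred, internal estimate alone
    otherwise."""
    p = agency_posterior(x_v, x_e, sigma_v, sigma_e, **kw)
    w_v = sigma_e**2 / (sigma_v**2 + sigma_e**2)  # visual weight under fusion
    fused = w_v * x_v + (1.0 - w_v) * x_e
    return p * fused + (1.0 - p) * x_e
```

Small deviations between x_v and x_e yield a high agency belief and near-optimal fusion; large deviations drive the belief toward zero, so the percept falls back on the internal estimate, reproducing the qualitative pattern described in the results.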
Results: 
Both models are in good agreement with the data. Fig. A shows the fit of Model 1 superimposed on the data from a single subject. Fig. B shows the belief that the visual stimulus was influenced by the own action, which decreases for large deviations between predicted and real visual feedback. Bayesian model comparison shows a better fit for Model 1.
Citations
[1] Wolpert, D.M., Ghahramani, Z., Jordan, M. (1995) Science, 269: 1880-1882.
[2] Körding, K.P., Beierholm, E., Ma, W.J., Quartz, S., Tenenbaum, J.B., et al. (2007) PLoS ONE, 2(9): e943.
[3] Shams, L., Beierholm, U. (2010) TiCS, 14: 425-432.
Acknowledgements
This work was supported by the BCCN Tübingen (FKZ: 01GQ1002), the CIN Tübingen, the European Union (FP7-ICT-215866 project SEARISE), the DFG, and the Hermann and Lilly Schilling Foundation.